325 research outputs found

    An Outline of a Neural Architecture for Unified Visual Contrast and Brightness Perception

    In this contribution a neural architecture is proposed that serves as a framework for further empirical as well as modeling investigations into a unified theory of contrast, contour, and brightness perception. The computational mechanisms utilize a center-surround antagonism based on shunting interactions, which allow contrast as well as luminance data to be multiplexed. As a key new feature, this data is demultiplexed at a later stage into segregated processing streams that signal local contrast information of each polarity and a scaled, low-pass filtered, and compressed version of the luminance information, respectively. In correspondence with recent findings about the major processing channels in the primary visual system, the ON and OFF contrast channels feed into a subsystem for contrast processing, perceptual organization, and grouping (boundary contour system, BCS). The activity in the segregated luminance path, however, is hypothesized to be contrast enhanced via the same shunting interactions utilized by the coexisting contrast channels. Following Grossberg's FACADE architecture, it is suggested that activity generated in the BCS acts as a modulation mechanism that controls the local diffusion coefficients for lateral activity spreading within the segregated brightness-and-darkness (B&D) channel. A three-stage process is suggested for brightness reconstruction and filling-in. Based on the segregation of ON and OFF contrast information and basic neural principles such as divergence, convergence, and pooling, the model accounts for the linear response properties of odd- and even-symmetric simple and complex cells in V1. Theoretical analysis of the network's function at various stages of processing provides a framework for quantitative studies referring to available data on visual perception.
    German Ministry of Research and Technology (413-5839-01 IN 101 C/1)
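The shunting center-surround mechanism described above can be illustrated with a minimal 1-D sketch. This is not the paper's implementation: the Gaussian kernel widths and the parameters A, B, D are illustrative assumptions. It only shows how the steady state of dx/dt = -A·x + (B - x)·E - (D + x)·I carries both kinds of data: the numerator signals contrast (zero in uniform regions, opposite-signed peaks at an edge), while the luminance level enters through the denominator as a compressive gain.

```python
import numpy as np

def gaussian(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def shunting_response(luminance, A=1.0, B=1.0, D=1.0):
    # excitatory center and inhibitory surround inputs (illustrative widths)
    center = np.convolve(luminance, gaussian(1.0, 6), mode="same")
    surround = np.convolve(luminance, gaussian(3.0, 12), mode="same")
    # steady state of dx/dt = -A*x + (B - x)*center - (D + x)*surround;
    # the denominator grows with total luminance, giving compression
    return (B * center - D * surround) / (A + center + surround)

# a luminance step: zero response in uniform regions, biphasic peaks at the edge
lum = np.concatenate([np.full(50, 0.2), np.full(50, 0.8)])
r = shunting_response(lum)
```

With B = D the response vanishes exactly where center and surround inputs agree, so only luminance transitions are signaled.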

    A Simple Cell Model with Multiple Spatial Frequency Selectivity and Linear/Non-Linear Response Properties

    A model is described for cortical simple cells. Simple cells are selective for local contrast polarity, signaling light-dark and dark-light transitions. The proposed new architecture exhibits both linear and non-linear properties of simple cells. Linear responses are obtained by integration of the input stimulus within subfields of the cells, and by combinations of them. Non-linear behavior can be seen in the selectivity for certain features that can be characterized by the spatial arrangement of activations generated by initial on- and off-cells (center-surround). The new model also exhibits spatial frequency selectivity, with the generation of multi-scale properties being based on a single-scale band-pass input that is generated by the initial (retinal) center-surround processing stage.
    German BMFT grant (413-5839-01 IN 101 C/1); CNPq and NUTES/UFRJ, Brazil
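The subfield idea can be sketched in 1-D: a single-scale DoG front end is half-wave rectified into ON and OFF activations, and a polarity-selective "simple cell" is built from their spatial arrangement. The DoG widths, the subfield offset, and the push-pull combination are hand-picked assumptions for illustration, not the paper's parameterization.

```python
import numpy as np

def band_pass(signal, sc=1.0, ss=3.0, radius=12):
    # single-scale DoG ("retinal" center-surround) front end
    x = np.arange(-radius, radius + 1)
    def g(sigma):
        k = np.exp(-x ** 2 / (2 * sigma ** 2))
        return k / k.sum()
    return np.convolve(signal, g(sc) - g(ss), mode="same")

def simple_cell(bp, offset=2):
    # dark-to-light selective arrangement: an OFF subfield on the left and
    # an ON subfield on the right, push-pull against the mirror arrangement,
    # then half-wave rectification (the non-linearity)
    on, off = np.maximum(bp, 0.0), np.maximum(-bp, 0.0)
    return np.maximum(np.roll(off, offset) + np.roll(on, -offset)
                      - np.roll(on, offset) - np.roll(off, -offset), 0.0)

lum = np.concatenate([np.full(40, 0.2), np.full(40, 0.8)])  # dark -> light
resp_dl = simple_cell(band_pass(lum))          # preferred polarity
resp_ld = simple_cell(band_pass(lum[::-1]))    # reversed (light -> dark)
```

The cell responds strongly at the dark-to-light edge and stays silent at the same location for the reversed polarity, capturing the selectivity for the spatial arrangement of ON/OFF activations.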

    A Multi-Scale Network Model of Brightness Perception

    A neural network model of brightness perception is developed to account for a wide variety of difficult data, including the classical phenomenon of Mach bands and nonlinear contrast effects associated with sinusoidal luminance waves. The model builds upon previous work by Grossberg and colleagues on filling-in models that predict brightness perception through the interaction of boundary and feature signals. Model equations are presented and computer simulations illustrate the model's potential.
    Air Force Office of Scientific Research (F49620-92-J-0334); Northeast Consortium for Engineering Education (NCEE-A303-21-93); Office of Naval Research (N00014-91-J-4100); German BMFT grant (413-5839-01 IN 101 C/1); CNPq and NUTES/UFRJ, Brazil
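The boundary/feature interaction at the heart of such filling-in models can be sketched as boundary-gated lateral diffusion: feature (contrast) signals spread within a syncytium, while boundary activity shuts down the local diffusion coefficient so activity cannot cross contours. The 1-D ring, the permeability function, and all constants below are illustrative assumptions, not the paper's equations.

```python
import numpy as np

def fill_in(source, boundary, steps=6000, rate=0.2, K=500.0):
    # permeability of the gap between cells i and i+1; strong boundary
    # activity gates the local diffusion coefficient toward zero
    perm = 1.0 / (1.0 + K * np.maximum(boundary, np.roll(boundary, -1)))
    v = source.copy()
    clamp = source != 0
    for _ in range(steps):
        flux = perm * (np.roll(v, -1) - v)        # flow across each gap
        v = v + rate * (flux - np.roll(flux, 1))  # explicit diffusion step
        v[clamp] = source[clamp]                  # feature cells act as sources
    return v

boundary = np.zeros(60)
boundary[30] = boundary[59] = 1.0   # two walls -> two compartments on the ring
source = np.zeros(60)
source[5], source[40] = 0.8, 0.3    # one local contrast signal per compartment
v = fill_in(source, boundary)
```

Each compartment fills in to a uniform level set by its local contrast signal, so a sparse edge code is converted into extended brightness regions.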

    A Contrast- and Luminance-Driven Multiscale Network Model of Brightness Perception

    A neural network model of brightness perception is developed to account for a wide variety of data, including the classical phenomenon of Mach bands, low- and high-contrast missing-fundamental stimuli, luminance staircases, and non-linear contrast effects associated with sinusoidal waveforms. The model builds upon previous work on filling-in models that produce brightness profiles through the interaction of boundary and feature signals. Boundary computations that are sensitive to luminance steps and to continuous luminance gradients are presented. A new interpretation of feature signals through the explicit representation of contrast-driven and luminance-driven information is provided and directly addresses the issue of brightness "anchoring." Computer simulations illustrate the model's competencies.
    Air Force Office of Scientific Research (F49620-92-J-0334); Northeast Consortium for Engineering Education (NCEE-A303-21-93); Office of Naval Research (N00014-91-J-4100); German BMFT grant (413-5839-01 IN 101 C/1); CNPq and NUTES/UFRJ, Brazil
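Why an explicit luminance-driven stream helps with "anchoring" can be shown in a few lines: a band-pass (contrast-driven) channel discards the absolute luminance level, so recombining it with a low-pass (luminance-driven) channel restores the anchor. The DoG decomposition and kernel widths below are illustrative assumptions, not the paper's filters.

```python
import numpy as np

def gaussian(sigma, radius=15):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

lum = np.linspace(0.0, 1.0, 100)                      # a luminance ramp
center = np.convolve(lum, gaussian(1.0), mode="same")
surround = np.convolve(lum, gaussian(4.0), mode="same")

contrast_stream = center - surround     # contrast-driven (band-pass) channel
luminance_stream = surround             # luminance-driven (low-pass) channel
brightness = contrast_stream + luminance_stream       # re-anchored estimate
```

On the ramp interior the contrast stream is essentially zero (no absolute level survives band-pass filtering), while the recombined signal recovers the veridical luminance values.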

    Detection of first and second order motion

    A model of motion detection is presented. The model contains three stages. The first stage is unoriented and is selective for contrast polarities. The next two stages work in parallel. A phase-insensitive stage pools across different contrast polarities through a spatiotemporal filter and thus can detect both first- and second-order motion. A phase-sensitive stage keeps contrast polarities separate, each of which is filtered through a spatiotemporal filter, so only first-order motion can be detected. Differential phase sensitivity can therefore account for the detection of first- and second-order motion. Phase-insensitive detectors correspond to cortical complex cells, and phase-sensitive detectors to simple cells.
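The pooling distinction can be demonstrated with a toy stand-in: a Reichardt-style delay-and-correlate detector replaces the model's spatiotemporal filters, and a polarity-alternating drifting bar serves as a crude stand-in for a second-order stimulus. Stimuli and parameters are invented for illustration only.

```python
import numpy as np

def reichardt(s):
    # elementary motion detector on a (time, space) array:
    # delay-and-correlate with opponent subtraction; > 0 means rightward
    right = (s[1:, 1:] * s[:-1, :-1]).sum()
    left = (s[1:, :-1] * s[:-1, 1:]).sum()
    return right - left

def phase_sensitive(s):
    # "simple-cell" route: ON and OFF polarities filtered separately
    on, off = np.maximum(s, 0.0), np.maximum(-s, 0.0)
    return reichardt(on) + reichardt(off)

def phase_insensitive(s):
    # "complex-cell" route: pooled across contrast polarity first
    return reichardt(np.abs(s))

T, X = 20, 40
first = np.zeros((T, X))   # bright bar drifting rightward (first-order)
alt = np.zeros((T, X))     # same trajectory, polarity flips every frame
for t in range(T):
    first[t, 5 + t] = 1.0
    alt[t, 5 + t] = (-1.0) ** t
```

Both routes see the first-order bar, but only the detector that pools across polarities recovers the motion of the polarity-alternating bar; the polarity-separate route returns exactly zero, mirroring the model's claim about differential phase sensitivity.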

    Extraction of Surface-Related Features in a Recurrent Model of V1-V2 Interactions

    Humans can effortlessly segment surfaces and objects from two-dimensional (2D) images that are projections of the 3D world. The projection from 3D to 2D partially occludes surfaces, depending on their position in depth and on viewpoint. One way for the human visual system to infer monocular depth cues could be to extract and interpret occlusions. It has been suggested that the perception of contour junctions, in particular T-junctions, may be used as a cue for occlusion of opaque surfaces. Furthermore, X-junctions could be used to signal occlusion of transparent surfaces. In this contribution, we propose a neural model that suggests how surface-related cues for occlusion can be extracted from a 2D luminance image. The approach is based on feedforward and feedback mechanisms found in visual cortical areas V1 and V2. In a first step, contours are completed over time by generating groupings of like-oriented contrasts. A few iterations of feedforward and feedback processing lead to a stable representation of completed contours and at the same time to a suppression of image noise. In a second step, contour junctions are localized and read out from the distributed representation of boundary groupings. Moreover, surface-related junctions are made explicit such that they can interact to generate surface segmentations in static images. In addition, we compare our extracted junction signals with a standard computer vision approach for junction detection to demonstrate that our approach outperforms simple feedforward computation-based approaches. A model is proposed that uses feedforward and feedback mechanisms to combine contextually relevant features in order to generate consistent boundary groupings of surfaces. Perceptually important junction configurations are robustly extracted from neural representations to signal cues for occlusion and transparency.
    Unlike previous proposals, which treat localized junction configurations as 2D image features, we link them to mechanisms of apparent surface segregation. As a consequence, we demonstrate how junctions can change their perceptual representation depending on the scene context and the spatial configuration of boundary fragments.
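The junction read-out can be caricatured in a purely feedforward toy: oriented "boundary" energies are measured as collinear neighbor support, and a junction is signaled where evidence for two orientations coincides at the same location. The test image, the 3-pixel support, and the min-combination are assumptions standing in for the model's recurrent V1-V2 groupings.

```python
import numpy as np

img = np.zeros((21, 21))
img[10, :] = 1.0     # horizontal occluding edge
img[10:, 10] = 1.0   # vertical contour terminating at the edge (a T-junction)

# oriented boundary energies: support from collinear neighbors
h = np.roll(img, 1, axis=1) + img + np.roll(img, -1, axis=1)  # horizontal
v = np.roll(img, 1, axis=0) + img + np.roll(img, -1, axis=0)  # vertical

# a junction needs simultaneous evidence for both orientations
junction = np.minimum(h, v) * (img > 0)
r, c = np.unravel_index(junction.argmax(), junction.shape)
```

Along straight contour segments one of the two orientation energies is weak, so the conjunction peaks only where the vertical contour abuts the horizontal edge, i.e. at the T-junction.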

    A Dynamical Model of Binding in Visual Cortex During Incremental Grouping and Search

    Get PDF
    Binding of visual information is crucial for several perceptual tasks. To incrementally group an object, elements in a space-feature neighborhood need to be bound together starting from an attended location (Roelfsema, TICS, 2005). To perform visual search, candidate locations and cued features must be evaluated conjunctively to retrieve a target (Treisman & Gormican, Psychol Rev, 1988). Despite the different requirements these tasks place on binding, both are solved by the same neural substrate. In a model of perceptual decision-making, we give a mechanistic explanation for how this can be achieved. The architecture consists of a visual cortex module and a higher-order thalamic module. While the cortical module extracts stimulus features across a hierarchy of spatial scales, the thalamic module provides a purely spatial relevance map. Both modules interact bidirectionally to enter locations of task relevance into the thalamus, while allowing integration of context with local features within cortical maps. This integration realizes the model's binding mechanism. It is implemented by pyramidal neurons with dynamical basal and apical compartments performing coincidence detection (Larkum, TINS, 2013). The basal compartment is driven by bottom-up feature information and produces the neuron's output; the apical compartment computes top-down contextual information akin to an interaction skeleton (Roelfsema & Singer, Cereb Cortex, 1998). Apical-basal integration yields an up-modulation of neuron activity, binding it to the attended configuration. Gating information from the thalamus restricts this integration to task-relevant locations (Saalman & Kastner, Curr Op Neurobiol, 2009). Through model simulations, we show how altering the apical compartment's operation regime steers binding to perform either search or incremental grouping.
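The core coincidence mechanism can be reduced to a few lines: basal drive produces the output, coincident apical (contextual) input up-modulates it multiplicatively, and a thalamic gate restricts the modulation to task-relevant locations. The multiplicative form, the gain, and all activity values are illustrative assumptions, not the model's dynamics.

```python
import numpy as np

def pyramidal_output(basal, apical, gate, gain=2.0):
    # coincidence detection: apical (top-down) input up-modulates the
    # basally driven output, but only where the thalamic gate is open
    modulation = 1.0 + gain * apical * gate
    return basal * modulation

features = np.array([0.5, 0.5, 0.5, 0.5])  # identical bottom-up drive
context = np.array([0.0, 1.0, 1.0, 0.0])   # top-down grouping signal (apical)
gate = np.array([0.0, 0.0, 1.0, 1.0])      # thalamic spatial relevance map

out = pyramidal_output(features, context, gate)
```

Only the unit where apical context and the thalamic gate coincide is up-modulated; it is thereby "bound" to the attended configuration while units with context alone or gating alone stay at baseline.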